# 4-bit quantized inference
## SWE Agent LM 32B 4bit
A 4-bit quantized version converted from the SWE-bench/SWE-agent-LM-32B model, optimized for software engineering tasks.
- License: Apache-2.0
- Tags: Large Language Model, Transformers, English
- Author: mlx-community
- Downloads: 31 · Likes: 1
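
As a quick illustration of 4-bit inference with an MLX conversion like this one, the sketch below loads the quantized weights through the mlx-lm package and generates a completion. The repository id `mlx-community/SWE-agent-LM-32B-4bit` and the prompt are assumptions made for illustration; check the model card for the exact name.

```python
# Minimal sketch: 4-bit quantized inference with mlx-lm on Apple Silicon.
# The repo id below is an assumption based on this listing; verify it on
# the model card before running.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/SWE-agent-LM-32B-4bit")

prompt = "Write a Python function that parses a unified diff."
response = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(response)
```

Quantizing to 4 bits roughly quarters the weight memory relative to 16-bit precision, which is what makes a 32B model practical on a single workstation.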
## Midnight Miqu 70B V1.5 4bit
A 4-bit quantized build of Midnight-Miqu-70B-v1.5, a 70B-parameter large language model for text-generation tasks.
- Tags: Large Language Model, Transformers
- Author: cecibas
- Downloads: 361.62k · Likes: 3
## Gpt4 X Alpaca 13b Native 4bit 128g
A 13B-parameter language model fine-tuned on GPT-4 and Alpaca instruction data, supporting 4-bit quantized inference.
- Tags: Large Language Model, Transformers
- Author: anon8231489123
- Downloads: 344 · Likes: 736
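
The "4bit 128g" suffix conventionally indicates 4-bit quantization with a group size of 128 (GPTQ-style), and checkpoints in that older format may need a GPTQ-aware loader. As a more generic sketch of 4-bit inference in the Transformers ecosystem, the example below instead quantizes a full-precision checkpoint to 4-bit (NF4) at load time with bitsandbytes; the model id reuses the SWE-bench/SWE-agent-LM-32B base model named above purely for illustration.

```python
# Minimal sketch: load-time 4-bit (NF4) quantization with transformers +
# bitsandbytes. This is a generic 4-bit approach, not the GPTQ "128g"
# format of the checkpoint above; the model id is illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "SWE-bench/SWE-agent-LM-32B"  # full-precision base model from the listing above

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

inputs = tokenizer("Explain what a linked list is.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```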